Policy Finetuning in Reinforcement Learning via Design of Experiments using Offline Data

Neural Information Processing Systems

In some applications of reinforcement learning, a dataset of pre-collected experience is already available, but it is also possible to acquire additional online data to help improve the quality of the policy. However, it may be preferable to gather this additional data with a single, non-reactive exploration policy and thereby avoid the engineering costs associated with switching policies. In this paper we propose an algorithm with provable guarantees that can leverage an offline dataset to design a single non-reactive policy for exploration. We theoretically analyze the algorithm and measure the quality of the final policy as a function of the local coverage of the original dataset and the amount of additional data collected.
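To make the idea concrete, here is a minimal sketch of the general recipe the abstract describes: use offline state-action coverage to design one fixed exploration policy, then collect all additional data with that policy, never switching mid-collection. This is not the paper's actual algorithm; the inverse-count bonus, the function names, and the toy environment are all hypothetical illustrations.

```python
import numpy as np

def design_exploration_policy(counts: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """counts[s, a] = number of offline samples for state s, action a.
    Returns a stochastic policy pi[s, a] that favours poorly covered pairs."""
    # Inverse-count bonus: rarely seen (s, a) pairs get larger weight.
    # (A stand-in for the paper's design-of-experiments criterion.)
    bonus = 1.0 / np.sqrt(counts + 1.0)
    # Softmax over actions turns the bonuses into a valid per-state distribution.
    logits = bonus / temperature
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

def collect_online_data(pi, env_step, initial_state, num_steps, rng):
    """Roll out the *fixed* policy pi; it is never updated during collection,
    so no policy-switching engineering is needed."""
    s, data = initial_state, []
    for _ in range(num_steps):
        a = rng.choice(pi.shape[1], p=pi[s])
        s_next, r = env_step(s, a)
        data.append((s, a, r, s_next))
        s = s_next
    return data

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 3
    offline_counts = rng.integers(0, 50, size=(n_states, n_actions))
    pi = design_exploration_policy(offline_counts)

    # Toy environment stub standing in for the real system.
    def env_step(s, a):
        return int(rng.integers(n_states)), float(rng.random())

    extra = collect_online_data(pi, env_step, initial_state=0, num_steps=20, rng=rng)
    print(f"collected {len(extra)} transitions with one non-reactive policy")
```

The key property, matching the abstract's motivation, is that the exploration policy is computed once from the offline counts and then held fixed, so the quality of the extra data depends on how well the offline dataset covers the relevant states.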


Improving the learning process and providing more accurate similarity matrices for unannotated data can positively …

Neural Information Processing Systems

We sincerely thank the reviewers for their valuable comments. We have proofread the paper and fixed the errors that were pointed out. Related Work: Thank you for the additional references; we will include and discuss them in the revised version. Publishing code: Upon acceptance of our paper, we will publicly release the source code.